Deep Neural Networks (DNNs) have become an essential component of many application domains, including web-based services. These services require high throughput and (near) real-time capabilities, e.g., to respond or react to users' requests or to process a stream of incoming data on time. However, the trend in DNN design is toward larger models with many layers and parameters, to achieve more accurate results. Although these models are often pre-trained, the computational complexity of such large models remains significant, which hinders low inference latency. Implementing a caching mechanism is a typical systems-engineering solution for speeding up a service's response time; however, traditional caching is often unsuitable for DNN-based services. In this paper, we propose an end-to-end automated solution to improve the performance of DNN-based services in terms of computational complexity and inference latency. Our caching method adopts the ideas of self-distillation of DNN models and early exits. The proposed solution is an automated online layer-caching mechanism that allows a large model to exit early at inference time if the cached model at one of the early exits is sufficiently confident. One of the main contributions of this paper is that we implement the idea as online caching, meaning the cache models need no access to training data and are built solely from the incoming data at runtime, making the approach applicable to services using pre-trained models. Our experimental results on two downstream tasks (face and object classification) show that, on average, caching can reduce the computational complexity of these services by up to 58% (in terms of FLOPs count) and improve their inference latency by up to 46%, with little to no reduction in accuracy.
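To make the mechanism concrete, below is a minimal PyTorch sketch of confidence-based early exiting: a backbone is split into two stages, a lightweight exit head is attached after the first stage, and inference stops early when the head's softmax confidence clears a threshold. The module names, split point, and threshold are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class EarlyExitCache(nn.Module):
    """Wraps a backbone split into two stages; an exit head after stage 1
    acts as the 'cache' and answers early when it is confident enough."""
    def __init__(self, stage1, stage2, exit_head, threshold=0.9):
        super().__init__()
        self.stage1, self.stage2 = stage1, stage2
        self.exit_head = exit_head        # small head, trainable online
        self.threshold = threshold        # confidence needed for a cache hit

    def forward(self, x):
        h = self.stage1(x)
        early_logits = self.exit_head(h)
        conf, _ = early_logits.softmax(dim=-1).max(dim=-1)
        if conf.item() >= self.threshold:  # cache hit: skip the deep layers
            return early_logits
        return self.stage2(h)              # cache miss: run the full model

# Toy usage with random layers standing in for a pretrained backbone.
stage1 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
model = EarlyExitCache(stage1, nn.Linear(128, 10), nn.Linear(128, 10))
print(model(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```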
Most automated software testing tasks can benefit from an abstract representation of test cases. Traditionally, this is done based on the code coverage of the test cases. Specification-level criteria can replace code coverage to better represent test cases' behavior, but they are often not cost-effective. In this paper, we hypothesize that execution traces of test cases can make a good alternative for abstracting their behavior in automated testing tasks. We propose a novel embedding approach, Test2Vec, that maps test execution traces to a latent space. We evaluate this representation on the test case prioritization (TP) task. Our default TP method is based on the similarity of the embedded vectors to the vectors of historically failing tests. We also study an alternative based on the diversity of test vectors. Finally, we propose a method for deciding which TP approach to choose for a given test suite. The experiments are based on several real and seeded faults, with over a million execution traces. Results show that our proposed TP improves the best alternatives by 41.80% in terms of the median rank of the first failing test case (FFR). It outperforms traditional code-coverage-based approaches by 25.05% and 59.25% in terms of median APFD and median normalized FFR, respectively.
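As a hedged illustration of the default TP method, the sketch below ranks tests by the cosine similarity of their embedding vectors to historical failing-test vectors. The function and the toy vectors are stand-ins for actual Test2Vec embeddings.

```python
import numpy as np

def prioritize(test_vecs, failing_vecs):
    """Rank tests by their max cosine similarity to past failing-test vectors."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    scores = [max(cos(t, f) for f in failing_vecs) for t in test_vecs]
    return np.argsort(scores)[::-1]  # most failure-like tests run first

# Toy embeddings standing in for Test2Vec outputs.
rng = np.random.default_rng(0)
tests = rng.normal(size=(5, 8))       # 5 tests embedded in 8 dimensions
failures = rng.normal(size=(2, 8))    # 2 historically failing tests
print(prioritize(tests, failures))    # execution order, highest priority first
```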
Federated Learning (FL) is a widely adopted distributed learning paradigm that, in practice, intends to preserve users' data privacy while leveraging the entire participants' datasets for training. In FL, multiple models are trained independently on the users' devices and centrally aggregated to update a global model in an iterative process. Although this approach is excellent at preserving privacy, FL still suffers from quality issues such as attacks and Byzantine faults. Some recent attempts have addressed such quality challenges with robust aggregation techniques for FL. However, the effectiveness of state-of-the-art (SOTA) robust techniques remains unclear and lacks a comprehensive study. Therefore, to better understand the current quality status and challenges of these SOTA FL techniques in the presence of attacks and faults, we perform a large-scale empirical study investigating the quality of SOTA FL from multiple angles: attacks, simulated faults (via mutation operators), and aggregation (defense) methods. In particular, we conduct our study on two generic image datasets and one real-world federated medical image dataset. We also systematically investigate, per dataset, the effect of the distribution of attacked users and of the independent and identically distributed (IID) factor of attacks/faults on the robustness results. After a large-scale analysis of 496 configurations, we find that most mutators applied to individual users have a negligible effect on the final model. Moreover, choosing the most robust FL aggregator depends on the attacks and datasets. Finally, we illustrate that a generic solution that performs almost as well as the best single aggregator under all attacks and configurations can be achieved with a simple ensemble of aggregators.
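For intuition on why the choice of aggregator matters, here is a toy comparison of plain FedAvg (mean) against coordinate-wise median, one common Byzantine-robust aggregator (not necessarily one evaluated in this study), when a single malicious client submits a poisoned update.

```python
import numpy as np

def fedavg(updates):
    return np.mean(updates, axis=0)       # standard (non-robust) aggregation

def coordinate_median(updates):
    return np.median(updates, axis=0)     # a simple Byzantine-robust aggregator

rng = np.random.default_rng(1)
honest = rng.normal(0.0, 0.1, size=(9, 4))   # 9 honest client updates
byzantine = np.full((1, 4), 100.0)           # 1 poisoned update
updates = np.vstack([honest, byzantine])

print("mean  :", fedavg(updates))            # dragged toward the attacker
print("median:", coordinate_median(updates)) # stays near the honest updates
```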
Automated Program Repair (APR) is defined as the process of fixing a bug/defect in the source code by an automated tool. APR tools have recently achieved promising results by leveraging state-of-the-art Natural Language Processing (NLP) techniques. APR tools such as TFix and CodeXGLUE, which combine text-to-text transformers with software-specific techniques, currently outperform the alternatives. However, in most APR studies the train and test sets are chosen from the same set of projects. In reality, however, APR models are meant to be generalizable to new and different projects. Therefore, there is a potential threat that reported APR models with high effectiveness perform poorly when the characteristics of the new project or its bugs differ from those of the training set (domain shift). In this study, we first define and measure the domain shift problem in automated program repair. We then propose a domain adaptation framework that can adapt an APR model to a given target project. We conduct an empirical study with three domain adaptation methods, FullFineTuning, TuningWithLightWeightAdapterLayers, and CurriculumLearning, using two state-of-the-art APR models (TFix and CodeXGLUE) on 611 bugs from 19 projects. The results show that our proposed framework can improve the effectiveness of TFix by 13.05% and CodeXGLUE by 23.4%. Another contribution of this study is the proposal of a data synthesis method to address the lack of labelled data in APR. We leverage transformers to create a bug generator model. We use the generated synthetic data to domain-adapt TFix and CodeXGLUE on projects with no data (zero-shot learning), which results in average improvements of 5.76% and 24.42% for TFix and CodeXGLUE, respectively.
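A minimal sketch of the FullFineTuning idea, assuming a HuggingFace text-to-text model: continue training the repair model on buggy/fixed pairs from the target project. The checkpoint name and the toy pair are illustrative, not the study's setup.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("t5-small")   # stand-in checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

buggy = "fix bug: if (x = 0) return;"   # hypothetical target-project bug
fixed = "if (x == 0) return;"           # its repaired version

inputs = tok(buggy, return_tensors="pt")
labels = tok(fixed, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # seq2seq repair loss
loss.backward()
optim.step()                                # one full fine-tuning step
print(float(loss))
```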
Numerous models have tried to effectively embed knowledge graphs in low dimensions. Among the state-of-the-art methods, Graph Neural Network (GNN) models provide structure-aware representations of knowledge graphs. However, they often utilize the information of relations and their interactions with entities inefficiently. Moreover, most state-of-the-art knowledge graph embedding models suffer from scalability issues because they assign high-dimensional embeddings to entities and relations. To address these limitations, we propose a scalable general knowledge graph encoder that adaptively involves a powerful tensor decomposition method in the aggregation function of RGCN, a well-known relational GNN model. Specifically, the parameters of a low-rank core projection tensor, used to transform neighborhood entities in the encoder, are shared across relations to benefit from multi-task learning and incorporate relation information effectively. Besides, we propose a low-rank estimation of the core tensor using CP decomposition to compress the model, which is also applicable, as a regularization method, to other similar linear models. We evaluate our model on knowledge graph completion as a common downstream task. We train our model using a new loss function based on contrastive learning, which relieves the training limitation of the 1-N method on huge graphs. We improved RGCN performance on FB15k-237 by 0.42% with considerably lower dimensionality of embeddings.
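The sketch below illustrates the shared low-rank core idea in isolation: relation-specific projection matrices are assembled from CP factors shared across all relations, so the parameter count scales with the rank rather than with num_relations × d_in × d_out. The class and dimension names are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

class CPRelationProjection(nn.Module):
    """Relation-specific projections W_r assembled from shared CP factors:
    W_r = sum_k A[r, k] * outer(B[k], C[k])."""
    def __init__(self, num_rel, d_in, d_out, rank):
        super().__init__()
        self.A = nn.Parameter(torch.randn(num_rel, rank))  # per-relation weights
        self.B = nn.Parameter(torch.randn(rank, d_in))     # shared input factors
        self.C = nn.Parameter(torch.randn(rank, d_out))    # shared output factors

    def forward(self, h, rel):
        W = torch.einsum('k,ki,ko->io', self.A[rel], self.B, self.C)
        return h @ W                       # project neighbor features under rel

proj = CPRelationProjection(num_rel=5, d_in=16, d_out=16, rank=4)
h = torch.randn(3, 16)                     # toy neighbor entity features
print(proj(h, rel=2).shape)                # torch.Size([3, 16])
```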
Building an accurate model of travel behaviour based on individuals' characteristics and built environment attributes is of importance for policy-making and transportation planning. Recent experiments with big data and Machine Learning (ML) algorithms aimed at better travel behaviour analysis have mainly overlooked socially disadvantaged groups. Accordingly, in this study, we explore the travel behaviour responses of low-income individuals to transit investments in the Greater Toronto and Hamilton Area, Canada, using statistical and ML models. We first investigate how the model choice affects the prediction of transit use by the low-income group. This step includes comparing the predictive performance of traditional and ML algorithms and then evaluating a transit investment policy by contrasting the predicted activities and the spatial distribution of transit trips generated by vulnerable households after improving accessibility. We also empirically investigate the transit investment proposed by each algorithm and compare it with the city of Brampton's future transportation plan. While, unsurprisingly, the ML algorithms outperform the classical models, there are still doubts about using them due to interpretability concerns. Hence, we adopt recent local and global model-agnostic interpretation tools to interpret how the models arrive at their predictions. Our findings reveal the great potential of ML algorithms for enhanced travel behaviour predictions for low-income strata without considerably sacrificing interpretability.
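As a hedged example of the interpretation step, the snippet below trains a tree ensemble on synthetic trip features and inspects global feature importance with a model-agnostic tool; the feature names and data are illustrative, not the study's dataset or exact tooling.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # columns: income, transit_access, age
y = (X[:, 1] - 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = GradientBoostingClassifier().fit(X, y)  # stands in for the ML models
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "transit_access", "age"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")        # which features drive predicted transit use
```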
In a spoofing attack, an attacker impersonates a legitimate user to access or tamper with data intended for or produced by the legitimate user. In wireless communication systems, these attacks may be detected by relying on features of the channel and transmitter radios. In this context, a popular approach is to exploit the dependence of the received signal strength (RSS) at multiple receivers or access points with respect to the spatial location of the transmitter. Existing schemes rely on long-term estimates, which makes it difficult to distinguish spoofing from movement of a legitimate user. This limitation is here addressed by means of a deep neural network that implicitly learns the distribution of pairs of short-term RSS vector estimates. The adopted network architecture imposes the invariance to permutations of the input (commutativity) that the decision problem exhibits. The merits of the proposed algorithm are corroborated on a data set that we collected.
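A minimal sketch of such a permutation-invariant (deep-set style) architecture: both short-term RSS vectors pass through a shared embedding and are combined by symmetric sum pooling, so the decision is unchanged when the pair is swapped. Layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PairInvariantDetector(nn.Module):
    """Deep-set style detector: symmetric sum pooling of the two embedded
    RSS vectors makes the output invariant to swapping the pair."""
    def __init__(self, n_receivers, hidden=32):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_receivers, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, rss_a, rss_b):
        z = self.phi(rss_a) + self.phi(rss_b)  # order-free combination
        return self.rho(z)                     # spoofing probability

det = PairInvariantDetector(n_receivers=6)
a, b = torch.randn(1, 6), torch.randn(1, 6)    # toy short-term RSS estimates
assert torch.allclose(det(a, b), det(b, a))    # commutativity holds
print(det(a, b))
```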
Dexterous and autonomous robots should be capable of executing elaborated dynamical motions skillfully. Learning techniques may be leveraged to build models of such dynamic skills. To accomplish this, the learning model needs to encode a stable vector field that resembles the desired motion dynamics. This is challenging as the robot state does not evolve on a Euclidean space, and therefore the stability guarantees and vector field encoding need to account for the geometry arising from, for example, the orientation representation. To tackle this problem, we propose learning Riemannian stable dynamical systems (RSDS) from demonstrations, allowing us to account for different geometric constraints resulting from the dynamical system state representation. Our approach provides Lyapunov-stability guarantees on Riemannian manifolds that are enforced on the desired motion dynamics via diffeomorphisms built on neural manifold ODEs. We show that our Riemannian approach makes it possible to learn stable dynamical systems displaying complicated vector fields on both illustrative examples and real-world manipulation tasks, where Euclidean approximations fail.
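For intuition, here is a heavily simplified Euclidean sketch of the shaping idea: a trivially stable base system dy/dt = -y is pushed through a fixed diffeomorphism, and the resulting, more complex vector field inherits the base system's stability. The paper learns the diffeomorphism with neural manifold ODEs on Riemannian manifolds; here phi is a hand-picked bijection on R^2.

```python
import numpy as np

def phi(y):          # a fixed diffeomorphism on R^2 (invertible by construction)
    return np.array([y[0], y[1] + y[0] ** 2])

def phi_inv(x):
    return np.array([x[0], x[1] - x[0] ** 2])

def step(x, dt=0.01):
    """One Euler step of the shaped dynamics: pull back through phi,
    advance the stable base system dy/dt = -y, push forward again."""
    y = phi_inv(x)
    y = y + dt * (-y)
    return phi(y)

x = np.array([2.0, 3.0])   # any start converges to the equilibrium phi(0) = 0
for _ in range(2000):
    x = step(x)
print(x)                   # close to [0, 0]
```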
In this paper, we increase the availability and integration of devices in the learning process to enhance the convergence of federated learning (FL) models. Federated learning addresses the issue of concentrating all data in one location by maintaining the ability to learn over decentralized datasets, combining privacy preservation with distributed training. Until the model converges, the server combines the updated weights obtained from each dataset over a number of rounds. The majority of the literature has suggested client selection techniques to accelerate convergence and boost accuracy. However, none of the existing proposals have focused on the flexibility to deploy and select clients as needed, wherever and whenever that may be. Due to extremely dynamic environments, some devices are actually not available to serve as clients in FL, which affects the availability of data for learning and the applicability of existing client selection solutions. In this paper, we address these limitations by introducing On-Demand-FL, a client deployment approach for FL that offers more volume and heterogeneity of data in the learning process. We make use of containerization technology such as Docker to build efficient environments using IoT and mobile devices serving as volunteers, and Kubernetes for orchestration. A genetic algorithm (GA) is used to solve the multi-objective optimization problem thanks to its evolutionary strategy. The experiments performed using the Mobile Data Challenge (MDC) dataset and the Localfed framework illustrate the relevance of the proposed approach and the efficiency of the on-the-fly deployment of clients whenever and wherever needed, with fewer discarded rounds and more available data.
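A toy sketch of GA-based client selection under assumed device attributes (data volume and battery level); the fitness function is a scalar stand-in for the paper's multi-objective formulation, and all names are hypothetical.

```python
import random
random.seed(0)

# Hypothetical volunteer devices with a data volume and a battery level.
devices = [{"data": random.randint(10, 100), "battery": random.random()}
           for _ in range(12)]

def fitness(mask):   # reward data volume, penalize low-battery volunteers
    return sum(d["data"] if d["battery"] > 0.5 else -d["data"]
               for d, m in zip(devices, mask) if m)

def mutate(mask, p=0.1):
    return [b ^ (random.random() < p) for b in mask]

pop = [[random.randint(0, 1) for _ in devices] for _ in range(20)]
for _ in range(50):                                 # evolve the selections
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]

best = max(pop, key=fitness)
print("selected clients:", [i for i, b in enumerate(best) if b])
```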
Channel Attention reigns supreme as an effective technique in the field of computer vision. However, the channel attention proposed by SENet suffers from information loss in feature learning caused by the use of Global Average Pooling (GAP) to represent channels as scalars. Thus, designing effective channel attention mechanisms requires finding a solution to enhance feature preservation when modeling channel inter-dependencies. In this work, we utilize wavelet transform compression as a solution to the channel representation problem. We first test the wavelet transform as an auto-encoder model equipped with a conventional channel attention module. Next, we test the wavelet transform as a standalone channel compression method. We prove that global average pooling is equivalent to the recursive approximate Haar wavelet transform. With this proof, we generalize channel attention using wavelet compression and name it WaveNet. Our method can be embedded within existing channel attention methods with a couple of lines of code. We test the proposed method on the ImageNet dataset for the image classification task. Our method outperforms the baseline SENet and achieves state-of-the-art results. Our code implementation is publicly available at https://github.com/hady1011/WaveNet-C.
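The claimed equivalence can be checked numerically: recursively applying the low-pass (averaging) half of the Haar transform to a length-2^k channel reduces it to exactly its global average. The averaging (unnormalized) form of the Haar filter is assumed here.

```python
import numpy as np

def haar_approx(x):
    return (x[0::2] + x[1::2]) / 2.0   # low-pass (averaging) half of one level

x = np.random.default_rng(0).normal(size=16)  # one channel's spatial values
a = x.copy()
while a.size > 1:
    a = haar_approx(a)                  # recurse: 16 -> 8 -> 4 -> 2 -> 1

print(a[0], x.mean())                   # identical: GAP == recursive Haar approx
```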